Most attention in K-12 artificial intelligence and machine learning (AI/ML) education has been given to having youths train models, with much less attention to the equally important testing of models when creating machine learning applications. Testing ML applications allows creators to evaluate models against their predictions and to identify and address failure and edge cases that could negatively impact user experiences. We investigate how testing each other's projects supported youths in taking perspective on the functionality, performance, and potential issues of their own projects. We analyzed testing worksheets and audio and video recordings collected during a two-week workshop in which 11 high school youths created physical computing projects that included (audio, pose, and image) ML classifiers. We found that through peer-testing, youths reflected on the size of their training datasets, the diversity of their training data, the design of their classes, and the contexts in which they produced training data. We discuss future directions for research on peer-testing in AI/ML education and current limitations of these kinds of activities.
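As a loose illustration of the kind of evaluation peer-testing enables (not drawn from the study itself; the function, class labels, and data below are hypothetical), a tester can tally a classifier's predictions against peer-collected examples to expose weak classes:

```python
# Minimal sketch: per-class accuracy from (true, predicted) label
# pairs gathered while a peer tests someone else's classifier.
from collections import defaultdict

def per_class_accuracy(pairs):
    """pairs: iterable of (true_label, predicted_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for true, pred in pairs:
        totals[true] += 1
        if pred == true:
            hits[true] += 1
    return {label: hits[label] / totals[label] for label in totals}

# Hypothetical results from a peer testing a pose classifier:
peer_test = [("wave", "wave"), ("wave", "jump"), ("jump", "jump"),
             ("jump", "jump"), ("clap", "wave"), ("clap", "clap")]
print(per_class_accuracy(peer_test))
# Low scores for a class point to training data that is too small,
# too uniform, or collected in too narrow a context.
```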
The cancellation factor (CF) is a model for the ratio between gravity wave perturbations in nightglow intensity and those in the ambient temperature. The CF model allows us to estimate the momentum and energy fluxes of gravity waves seen in nightglow images, as well as the divergence of these fluxes as the waves propagate through the mesosphere and lower thermosphere region, where the nightglow and Na layers are located. This study uses a set of wind/temperature Na lidar data and zenith nightglow image observations of the OH and O(1S) emissions to test and validate the CF model from an experimental perspective. The dataset analyzed was obtained during campaigns carried out at the Andes Lidar Observatory (ALO), Chile, in 2015, 2016, and 2017. The modeled CF was compared with observed CF values calculated as the ratio of wave amplitudes in nightglow images to those seen in lidar temperatures for vertically propagating waves. We show that, in general, the modeled CF underestimates the observed CF. However, the O(1S) emission line agrees better with the modeled value, presumably because of its simpler nightglow photochemistry. In contrast, the observed CF for the OH emission deviates by a factor of two from the modeled CF asymptotic value.
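For orientation, the cancellation factor in the airglow literature is commonly defined as the ratio of the relative intensity perturbation to the relative temperature perturbation induced by the same wave; the symbols below are illustrative and not taken verbatim from this paper:

```latex
% Cancellation factor (illustrative notation): relative nightglow
% intensity perturbation over relative temperature perturbation.
\[
  \mathrm{CF} = \frac{\delta I / \bar{I}}{\delta T / \bar{T}}
\]
% \delta I, \delta T : wave-induced perturbations in intensity and
%                      temperature
% \bar{I}, \bar{T}   : unperturbed (mean) background values
```

Under this definition, a modeled CF that underestimates the observed CF, as reported here for OH, means the imaged intensity response per unit temperature perturbation is larger than the photochemical model predicts.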
Abstract
Much attention in constructionism has focused on designing tools and activities that support learners in designing fully finished and functional applications and artefacts to be shared with others. But helping students learn to debug their applications often takes a surprisingly more instructionist stance: giving them checklists, teaching them strategies or providing them with test programmes. The idea of designing bugs for learning, or debugging by design, makes learners agents of their own learning and, more importantly, of making and solving mistakes. In this paper, we report on our implementation of ‘Debugging by Design’ activities in a high school classroom over a period of 8 hours as part of an electronic textiles unit. Students were tasked with crafting electronic textile artefacts containing problems, or bugs, for their peers to solve. Drawing on observations and interviews, we answer the following research questions: (1) How did students participate in making bugs for others? (2) What did students gain from designing and solving bugs for others? In the discussion, we address the opportunities and challenges that designing personally and socially meaningful failure artefacts provides for becoming objects-to-think-with and objects-to-share-with in student learning, and for promoting new directions in constructionism.

Practitioner notes

What is already known about this topic
- There is substantial evidence for the benefits of learning programming and debugging in the context of constructing personally relevant and complex artefacts, including electronic textiles.
- Related work on productive failure has demonstrated that providing learners with strategically difficult problems (in which they ‘fail’) equips them to better handle subsequent challenges.

What this paper adds
- We argue that designing bugs or ‘failure artefacts’ is as much a constructionist approach to learning as is designing fully functional artefacts.
- We consider how ‘failure artefacts’ can be both objects-to-learn-with and objects-to-share-with.
- We introduce the concept of ‘Debugging by Design’ (DbD) as a means to expand the application of constructionism to the context of developing ‘failure artefacts’.

Implications for practice and/or policy
- We conceptualise a new way to enable and empower students in debugging: by designing creative, multimodal buggy projects for others to solve.
- The DbD approach may support students in near-transfer of debugging and the beginnings of a more systematic approach to debugging in later projects, and should be explored in other domains beyond e-textiles.
- New studies should explore learning, design and teaching that empower students to design bugs in projects in mischievous and creative ways.
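To make the idea concrete, the paper's projects are e-textiles, but the same move works in any language; the sketch below is a hypothetical plain-Python stand-in, not an artefact from the study. The designer deliberately seeds a subtle, testable bug for a peer to reproduce, diagnose, and fix:

```python
# Hypothetical "Debugging by Design" artefact: a deliberately seeded
# off-by-one bug for a peer to find.
def blink_pattern(num_leds):
    """Intended: return the indices of every LED, 0..num_leds-1."""
    # Seeded bug: the range stops one LED early, so the last LED
    # never lights up -- subtle enough to require systematic testing.
    return [i for i in range(num_leds - 1)]

# The peer "tests" the artefact and reproduces the failure:
assert blink_pattern(4) == [0, 1, 2], "unexpected output"
print("Bug reproduced: LED 3 is never returned.")
# The fix the peer should arrive at: range(num_leds)
```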